Conversation
sql(
    "INSERT INTO %s VALUES (1, float('nan')),"
        + "(1, float('nan')), "
        + "(1, 10.0), "
The bug is replicated with this change in the test class.
Once this approach is approved, I'll update the test classes for the other Spark versions.
Could you explain why the bug is triggered with the addition of this row?
Basically, the bug is triggered when we have a data file containing a NaN count as well as upper and lower bounds.
Previously, without that line, only the NaN count was recorded; with this change, upper and lower bounds are generated as well.
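For illustration, a minimal sketch of the scenario being described, in the style of the test snippet above; the table name and schema here are hypothetical:

sql("CREATE TABLE db.t (id INT, data DOUBLE) USING iceberg");
sql(
    "INSERT INTO db.t VALUES (1, float('nan')), "
        + "(1, float('nan')), "
        + "(1, 10.0)");
// With only the NaN rows, the data file's metrics record just a nanValueCount
// for the data column; once 10.0 is added, lower/upper bounds are written as
// well, which is the combination that exposes the bug.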
Interesting, it's a bit unclear to me why the additional row would trigger collecting lower/upper bounds. I'd have to double-check whether there's some minimum row threshold or some other condition that controls whether lower/upper bounds are written when the footer is written. Looking at this test without the change, I would have expected lower/upper bounds of 1.0 and 2.0, respectively.
Maybe the floating-point nature of the 10.0 value?
Either way, it seems like an unrelated bug of its own.
Long nanCount = safeGet(file.nanValueCounts(), fieldId);
if (nanCount != null && nanCount > 0) {
  return false;
}
nit: Can we extract a small helper (e.g., hasNaNs) to keep this logic in one place?
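For example, a hedged sketch of what that extraction might look like, assuming the enclosing class already has safeGet and a DataFile in scope (the helper name is just the one suggested above):

// Hypothetical helper: true when the file's metrics record any NaNs for the given column.
private boolean hasNaNs(DataFile file, int fieldId) {
  Long nanCount = safeGet(file.nanValueCounts(), fieldId);
  return nanCount != null && nanCount > 0;
}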
cc @psvri @RussellSpitzer @huaxingao I went ahead and added this to the 1.11 milestone since it does look like a correctness issue when there are NaNs. I'm stepping through the debugger to see why the existing NaN test didn't catch the problem.
Closes #15069
I made changes in this PR based on the Iceberg spec ordering:
-NaN < -Infinity < -value < -0 < 0 < value < Infinity < NaN. When we have nanValueCount > 0 in a data file, we make the
hasValue fn return false. This in turn makes Aggregator.isValid return false, so on the Spark side we won't push down the aggregation.
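As a rough illustration of the correctness concern (hypothetical table and query, continuing the sketch above, and assuming MIN/MAX is the aggregate being pushed down):

// With NaN rows present, a MAX answered purely from file metadata would use the
// stored upper bound (which excludes NaN), while the spec ordering above
// (... < Infinity < NaN) makes the true MAX NaN. Having hasValue return false
// when nanValueCount > 0 keeps Spark from taking the metadata-only shortcut.
sql("SELECT MAX(data) FROM db.t");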